38,752 research outputs found

    Automatic Liver Lesion Segmentation Using A Deep Convolutional Neural Network Method

    Liver lesion segmentation is an important step in liver cancer diagnosis, treatment planning and treatment evaluation. LiTS (Liver Tumor Segmentation Challenge) provides a common testbed for comparing different automatic liver lesion segmentation methods. We participate in this challenge by developing a deep convolutional neural network (DCNN) method. The particular DCNN model works in 2.5D in that it takes a stack of adjacent slices as input and produces the segmentation map corresponding to the center slice. The model has 32 layers in total and makes use of both the long-range concatenation connections of U-Net [1] and the short-range residual connections of ResNet [2]. The model was trained on the 130 LiTS training datasets and achieved an average Dice score of 0.67 when evaluated on the 70 test CT scans, which ranked first in the LiTS challenge at the time of the ISBI 2017 conference. Comment: Submission for ISBI'2017 LiTS Challenge ISIC201
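    The 2.5D formulation described above (a stack of adjacent slices in, a mask for the center slice out) can be illustrated with a minimal sketch. PyTorch is assumed, and the layer sizes, slice count, and class name below are illustrative placeholders, not the paper's 32-layer U-Net/ResNet hybrid.

```python
import torch
import torch.nn as nn

class Tiny25DSegNet(nn.Module):
    """Toy 2.5D segmentation model: adjacent CT slices enter as input channels,
    and the network predicts a lesion mask for the center slice only."""
    def __init__(self, num_slices=5):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(num_slices, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=1),  # per-pixel lesion logit
        )

    def forward(self, slice_stack):
        # slice_stack: (batch, num_slices, H, W) -> (batch, 1, H, W)
        return self.body(slice_stack)

stack = torch.randn(2, 5, 128, 128)     # two stacks of 5 adjacent slices
center_logits = Tiny25DSegNet()(stack)  # segmentation map for the center slice
```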

    On Kernel Mengerian Orientations of Line Multigraphs

    We present a polyhedral description of kernels in orientations of line multigraphs. Given a digraph $D$, let $FK(D)$ denote the fractional kernel polytope defined on $D$, and let $\sigma(D)$ denote the linear system defining $FK(D)$. A digraph $D$ is called kernel perfect if every induced subdigraph $D'$ has a kernel, kernel ideal if $FK(D')$ is integral for each induced subdigraph $D'$, and kernel Mengerian if $\sigma(D')$ is totally dual integral (TDI) for each induced subdigraph $D'$. We show that an orientation of a line multigraph is kernel perfect iff it is kernel ideal iff it is kernel Mengerian. Our result strengthens the theorem of Borodin et al. [3] on kernel perfect digraphs and generalizes the theorem of Kiraly and Pap [7] on the stable matching problem. Comment: 12 pages, corrected and slightly expanded version

    On the coherent Hopf 2-algebras

    We construct a coherent Hopf 2-algebra as a quantisation of a coherent 2-group, which consists of two Hopf coquasigroups and a coassociator. Within this constructive method, if we replace the Hopf coquasigroups by Hopf algebras, we obtain a strict Hopf 2-algebra, which is a quantisation of a 2-group. We also study the crossed comodule of Hopf algebras, which is shown to be a strict Hopf 2-algebra under some conditions. As an example, a quasi-coassociative Hopf coquasigroup is employed to build a special coherent Hopf 2-algebra with a nontrivial coassociator. Following this, we study functions on a Cayley algebra basis. Comment: 36 pages

    NDT: Neural Decision Tree Towards Fully Functioned Neural Graph

    Though traditional algorithms can be embedded into neural architectures with the principle proposed in \cite{xiao2017hungarian}, variables that occur only in the condition of a branch cannot be updated, as a special case. To tackle this issue, we multiply the conditioned branches by the Dirac symbol (i.e. $\mathbf{1}_{x>0}$), then approximate the Dirac symbol with a continuous function (e.g. $1 - e^{-\alpha|x|}$). In this way, the gradients of condition-specific variables can be worked out approximately in the back-propagation process, making a fully functioned neural graph. Within our novel principle, we propose the neural decision tree \textbf{(NDT)}, which takes simplified neural networks as the decision function in each branch and employs complex neural networks to generate the output in each leaf. Extensive experiments verify our theoretical analysis and demonstrate the effectiveness of our model. Comment: This is a draft paper; I will keep refining it until it is accepted.
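    A minimal sketch of the smoothed-indicator trick follows. PyTorch is assumed; the scalar toy branch, the value of alpha, and the variable names are illustrative, not the paper's NDT implementation.

```python
import torch

def soft_indicator(x, alpha=5.0):
    # Continuous surrogate for the branch indicator, as in the abstract: 1 - exp(-alpha * |x|).
    # It is close to 0 near the decision boundary and close to 1 away from it.
    return 1.0 - torch.exp(-alpha * torch.abs(x))

# Toy gated branch: the branch output is multiplied by the smoothed condition,
# so a variable that appears only in the condition still receives a gradient.
w_cond = torch.tensor(0.3, requires_grad=True)    # occurs only in the condition
w_branch = torch.tensor(2.0, requires_grad=True)  # occurs only in the branch body
x = torch.tensor(1.5)

condition = w_cond * x - 0.1      # quantity whose sign would select the branch
gate = soft_indicator(condition)  # differentiable stand-in for the hard indicator
y = gate * (w_branch * x)         # gated branch output

y.backward()
print(w_cond.grad, w_branch.grad)  # both gradients are non-zero thanks to the smooth gate
```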

    Knowledge Recognition Algorithm enables P = NP

    This paper introduces a knowledge recognition algorithm (KRA) that is both a Turing machine algorithm and an oracle Turing machine algorithm. By definition, KRA is a non-deterministic language recognition algorithm; at the same time, it can be implemented as a deterministic Turing machine algorithm. KRA applies mirrored perceptual-conceptual languages to learn member-class relations between the two languages iteratively and to retrieve information through deductive and reductive recognition from one language to the other. The novelty of KRA is that the conventional concept of a relation is adjusted; the computation therefore becomes efficient bidirectional string mapping.

    R_b Constraints on Littlest Higgs Model with T-parity

    In the framework of the littlest Higgs model with T-parity (LHT), we study the contributions of the T-even and T-odd particles to the branching ratio R_b. We find that the precision data on R_b can place strong constraints on the masses of the T-odd fermions. Comment: 11 pages, 5 figures

    Margin-Based Feed-Forward Neural Network Classifiers

    The Margin-Based Principle was proposed long ago, and it has been shown, both theoretically and in practice, to reduce structural risk and improve performance. Meanwhile, the feed-forward neural network is a traditional classifier that is currently very popular in its deeper-architecture form. However, the standard training algorithm for feed-forward neural networks derives from the Widrow-Hoff Principle, i.e. it minimizes the squared error. In this paper, we propose a new training algorithm for feed-forward neural networks based on the Margin-Based Principle, which effectively improves the accuracy and generalization ability of neural network classifiers with fewer labelled samples and a flexible network. We conducted experiments on four open UCI datasets and achieved good results as expected. In conclusion, our model can handle sparsely labelled, higher-dimensional datasets with high accuracy, while converting an existing ANN training setup to our method is easy and requires almost no extra work. Comment: This paper has been published in ICANN 2015: International Conference on Artificial Neural Networks, Amsterdam, The Netherlands (May 14-15, 2015).
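    A minimal sketch of replacing a squared-error objective with a margin-based one in a small feed-forward classifier follows. PyTorch is assumed; the architecture, margin, and hyperparameters are illustrative placeholders, not the paper's exact algorithm.

```python
import torch
import torch.nn as nn

# Small feed-forward classifier.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))

# Margin-based objective (multi-class hinge loss) instead of squared error.
criterion = nn.MultiMarginLoss(margin=1.0)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

features = torch.randn(32, 20)       # a batch of 32 samples with 20 features
labels = torch.randint(0, 3, (32,))  # class indices in {0, 1, 2}

scores = model(features)             # raw class scores
loss = criterion(scores, labels)     # penalizes margins smaller than 1
loss.backward()
optimizer.step()
```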

    Finiteness of small factor analysis models

    We consider small factor analysis models with one or two factors. Fixing the number of factors, we prove a finiteness result about the covariance matrix parameter space as the size of the covariance matrix increases. According to this result, there is a distinguished matrix size starting from which one can decide whether a given covariance matrix belongs to the parameter space by checking whether all of its principal submatrices of the distinguished size belong to the corresponding parameter space. We show that the distinguished matrix size is equal to four in the one-factor model and to six in the two-factor model.
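    The reduction to principal submatrices can be made concrete with a small sketch. The membership test in_one_factor_model below is a hypothetical placeholder, since the abstract does not state the actual criterion; only the structure of the reduction is illustrated.

```python
from itertools import combinations

import numpy as np

def in_one_factor_model(sigma):
    """Hypothetical placeholder: decide whether a small covariance matrix lies
    in the one-factor parameter space (the real test is not given in the abstract)."""
    raise NotImplementedError

def in_model_via_principal_submatrices(sigma, k=4):
    """Finiteness result, one-factor case: membership of an n x n covariance matrix
    reduces to membership of every k x k principal submatrix, with k = 4."""
    n = sigma.shape[0]
    return all(
        in_one_factor_model(sigma[np.ix_(rows, rows)])
        for rows in combinations(range(n), k)
    )
```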

    Smoothness of Gaussian conditional independence models

    Conditional independence in a multivariate normal (or Gaussian) distribution is characterized by the vanishing of subdeterminants of the distribution's covariance matrix. Gaussian conditional independence models thus correspond to algebraic subsets of the cone of positive definite matrices. For statistical inference in such models, it is important to know whether or not the model contains singularities. We study this issue in models involving up to four random variables. In particular, we give examples of conditional independence relations which, despite being probabilistically representable, yield models that decompose non-trivially into a finite union of smooth submodels.
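    The vanishing-subdeterminant characterization referred to above is the standard one, recalled here as a reminder rather than taken from the paper: for a Gaussian vector with positive definite covariance matrix, a conditional independence statement corresponds to a vanishing almost-principal minor.

```latex
% X is Gaussian with positive definite covariance matrix \Sigma;
% \Sigma_{A,B} is the submatrix with rows indexed by A and columns by B.
X_i \perp\!\!\!\perp X_j \mid X_K
\quad\Longleftrightarrow\quad
\det \Sigma_{\{i\} \cup K,\, \{j\} \cup K} = 0 .
```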

    Max-Entropy Feed-Forward Clustering Neural Network

    The outputs of a non-linear feed-forward neural network are positive, and they can be treated as probabilities once they are normalized to sum to one. If we take the Entropy-Based Principle into consideration, the outputs for each sample can be interpreted as the distribution of that sample over the different clusters. The Entropy-Based Principle is the principle by which we can estimate an unknown distribution under some limited conditions. As this paper defines two processes in the feed-forward neural network, our limiting condition is the abstracted features of the samples, which are worked out in the abstraction process, and the final outputs are the probability distribution over the clusters in the clustering process. When the Entropy-Based Principle is built into the feed-forward neural network, a clustering method is born. We conducted experiments on six open UCI datasets, compared against a few baselines, and used purity as the measurement. The results illustrate that our method outperforms all the other baselines, which are among the most popular clustering methods. Comment: This paper has been published in ICANN 201
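    A minimal sketch of reading normalized network outputs as a per-sample cluster distribution follows. PyTorch is assumed, softmax stands in for the normalization, and the layer sizes and cluster count are illustrative placeholders rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_clusters = 4
# Abstraction process: map raw features to an abstracted representation.
abstraction = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
# Clustering process: produce one score per cluster.
clustering = nn.Linear(32, num_clusters)

x = torch.randn(16, 10)            # a batch of 16 samples with 10 features
scores = clustering(abstraction(x))
probs = F.softmax(scores, dim=1)   # each row is positive and sums to one

# Per-sample entropy of the cluster distribution; an entropy-based objective
# would constrain or optimize this quantity during training.
entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1)
print(probs.shape, entropy.mean().item())
```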